Framework Document
A capacity-amplifying operating model that treats artificial intelligence as a commons, governed by the people it serves, oriented toward human flourishing, and accountable to the common good.
Inspired by the work of Dr. Carlton L. Robinson, DBA, whose Future of Work with AI Economic Development Model framed Human-Centered AI as applied public infrastructure and provided the conceptual foundation this framework builds upon. The layered infrastructure model, the civic framing of AI as commons, and the emphasis on human agency preservation all trace directly to his thinking.
“Human-centered AI public infrastructure treats AI as a capacity-amplifier: expanding what people can do, decide, and become, without displacing human judgment, dignity, or agency at the center.”
01 — Foundation
This framework rejects a purely preference-centered model, which optimizes for what people currently want, in favor of a development-centered orientation. The infrastructure serves humans not as they are, but as they are capable of becoming. AI is a tool, not an end.
Rejected
Preference-centered. AI flatters and follows existing wants.
Adopted
Capacity-amplifying. AI expands what people can do and become.
The distinction matters: a preference-centered infrastructure optimizes for the engagement signal it already receives, which often works against growth. A development-centered one has a normative orientation; it cares about where the human is moving, not just what they are clicking.
02 — Framing
Calling this infrastructure is a moral and political claim, not just a technical one. Infrastructure implies universal access, public accountability, long-term orientation, and facilitation oriented toward the common good, not toward shareholder interest or engagement metrics.
| Product model | Infrastructure model |
|---|---|
| You are the user | You are the citizen |
| Optimized for engagement | Optimized for capacity |
| Access tied to payment | Access tied to membership in society |
| Governed by company | Governed by commons |
| Failure is a bug | Failure is a public harm |
Neutral facilitation, in this framework, does not mean having no values; it means being oriented toward the common good and holding all people on the journey collectively. Pure neutrality is a myth. The infrastructure has values; it makes them explicit and collective rather than hidden and corporate.
03 — Architecture
Four interdependent layers govern how the infrastructure actually runs. Each layer must answer to the stakeholder map: access for whom, governance by whom, accountability to whom, feedback from whom.
Access layer
Universal interfaces designed for varying literacy, language, and ability. No premium tiers that gate core capacity. Local touchpoints, not everything routed through a central platform.
Governance layer
Community representation in policy, not just expert panels. Transparent decision logs. Appeals and redress mechanisms that people can actually use.
Accountability layer
Public audits, not internal reviews. Accessible harm reporting. Consequences that are real, not just PR responses.
Feedback and adaptation layer
Continuous community input loops. Mechanisms to sunset what isn't working. Active protection against capture by corporations, governments, or majorities.
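The four layers and the stakeholder questions each must answer can be sketched as a simple data structure. This is an illustrative sketch only: the layer names and commitments follow the framework text above, but the structure itself is hypothetical, not something the framework prescribes.

```python
# Hypothetical sketch: the four interdependent layers, each paired with the
# stakeholder question it must answer. Commitments paraphrase the framework
# text; the data structure itself is illustrative.
LAYERS = {
    "access":         {"answers": "access for whom?",
                       "commitments": ["universal interfaces",
                                       "no premium tiers on core capacity",
                                       "local touchpoints"]},
    "governance":     {"answers": "governance by whom?",
                       "commitments": ["community representation in policy",
                                       "transparent decision logs",
                                       "usable appeals and redress"]},
    "accountability": {"answers": "accountability to whom?",
                       "commitments": ["public audits",
                                       "accessible harm reporting",
                                       "real consequences"]},
    "feedback":       {"answers": "feedback from whom?",
                       "commitments": ["continuous community input loops",
                                       "sunset mechanisms",
                                       "protection against capture"]},
}

def complete(layers: dict) -> bool:
    """Every layer must answer a stakeholder question and name commitments."""
    return all(layer["answers"].endswith("whom?") and layer["commitments"]
               for layer in layers.values())
```

The point of the sketch is the shape, not the values: a layer with no stakeholder question attached, or no concrete commitments, is incomplete by definition.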
04 — Stakeholders
In a public infrastructure model, stakeholders are not merely users; they are in relationship with the system at different levels of power. Three categories must be mapped before any governance decisions are made.
Those who are served: Individuals seeking capacity amplification, communities with historically limited access, future generations who inherit what is built.
Those who govern: Elected or representative bodies, community stewards and civil society, technical custodians with authority but not supremacy.
Those at risk of harm: Marginalized communities first experimented on and last to benefit, workers displaced by the infrastructure itself, dissenting voices who don't share the majority's definition of flourishing.
The critical question: who decides what “common good” means, and through what process? Whoever answers that question is the infrastructure’s real governing layer.
05 — Rules
These are the rules the infrastructure must obey, non-negotiable even under pressure to compromise. Every principle either expands human agency, protects it, or refuses to trade it away for efficiency.
Amplify, don't replace. AI enhances human judgment and capacity. It never substitutes for human decision-making where dignity, rights, or development are at stake.
Access is a right, not a reward. No means-testing for core infrastructure. Capacity-building tools belong to everyone by virtue of being human.
Transparency as default. How decisions are made, how data is used, how models were trained: visible, not buried in terms of service.
Community defines flourishing. No central authority imposes what growth or development means. Communities participate in defining the outcomes the infrastructure orients toward.
Harm visibility. The infrastructure actively surfaces who it is failing, not just who it is serving. Failure is data, not embarrassment.
Reversibility. Nothing is locked in permanently. Policies, models, interfaces: all subject to revision through legitimate collective process.
The common good has edges. The infrastructure explicitly protects minority voices from majority capture. Common good does not mean majority preference wins.
06 — Measurement
The temptation in any infrastructure is to measure what is easy: speed, volume, cost. But those metrics quietly smuggle in a value system that has nothing to do with human development. A system can be highly efficient and deeply harmful at the same time.
Capacity expansion
Are people able to do, decide, and become more than they could before?
Equity of access
Is the infrastructure closing gaps or widening them across income, geography, language, ability?
Agency preservation
Is human judgment staying central? Are dependency patterns growing or shrinking?
Community legitimacy
Do the people served trust and recognize themselves in the infrastructure?
Diagnostic test for any metric added: does this number tell us the infrastructure is serving humans, or just running smoothly?
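The diagnostic test can be made mechanical: every proposed metric declares which of the four dimensions it evidences, and a metric that only evidences throughput fails. A minimal, hypothetical sketch (the `Metric` record and dimension names are illustrative):

```python
from dataclasses import dataclass

# The four dimensions from this section; any number that evidences none of
# them is measuring smooth running, not human service.
DIMENSIONS = {"capacity_expansion", "equity_of_access",
              "agency_preservation", "community_legitimacy"}

@dataclass
class Metric:
    name: str
    evidences: str  # the dimension this number speaks to, or "throughput"

def serves_humans(metric: Metric) -> bool:
    """The diagnostic: does this number show the infrastructure serving
    humans, or just running smoothly?"""
    return metric.evidences in DIMENSIONS

serves_humans(Metric("median response latency", "throughput"))             # → False
serves_humans(Metric("new skills reported per member", "capacity_expansion"))  # → True
```

Forcing every metric through this declaration step is the design choice that matters; speed and cost can still be tracked, but they cannot masquerade as evidence of human development.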
07 — Safeguards
Every infrastructure eventually gets stress-tested. The question is not if it will fail; it is which failure you are most vulnerable to and whether you saw it coming.
Capture. The infrastructure gets colonized by a powerful interest. Safeguard: structural separation between funders and governance; no single entity holds disproportionate influence over policy or model development.
Exclusion drift. The infrastructure works beautifully for the majority and becomes invisible to everyone else. Safeguard: the equity metrics from Section 06 are binding, not aspirational; gaps that exceed defined thresholds trigger mandatory intervention.
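A binding equity threshold can be sketched as follows. The threshold value, the group names, and the gap definition are all illustrative assumptions; a real deployment would define them through the governance layer.

```python
# Hypothetical sketch of a binding equity check: if the access gap between
# the best- and worst-served groups exceeds a defined threshold, intervention
# is mandatory, not advisory. The 15-point threshold is illustrative.
GAP_THRESHOLD = 0.15

def intervention_required(access_rates: dict[str, float]) -> bool:
    """access_rates maps a group (language, region, ability, ...) to the
    share of its members who can actually use the core infrastructure."""
    gap = max(access_rates.values()) - min(access_rates.values())
    return gap > GAP_THRESHOLD

intervention_required({"urban": 0.92, "rural": 0.70})  # gap ≈ 0.22 → True
```

What makes the metric binding rather than aspirational is that the return value triggers a mandated process, not a report that can be shelved.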
Paternalism creep. The infrastructure starts deciding what is good for people rather than amplifying what people decide for themselves. Safeguard: community definition of flourishing is revisited on a fixed cycle; no permanent consensus, dissent built into the process.
Complexity collapse. The system becomes so layered that ordinary people can no longer meaningfully govern it. Safeguard: radical simplicity requirement at every governance layer; if a community member cannot understand how a decision was made, the process fails the transparency principle.
The meta-safeguard: the infrastructure must be harder to capture than it is to participate in. If participation is expensive and capture is cheap, the framework is already compromised at the design level.
08 — Quadrants
This framework is grounded in Integral Theory: philosopher Ken Wilber’s AQAL model, which maps every human experience across four irreducible dimensions. Most AI frameworks live in one dimension. This one attempts all four.
The four quadrants represent the interior and exterior of both the individual and the collective. Ignore any one and you have a partial map, which means partial solutions and predictable blind spots.
Upper Left — I
Individual interior. Consciousness, intention, agency. Where the philosophical foundation lives: the human's inner relationship to AI and their own judgment.
Upper Right — It
Individual exterior. Behavior, action, skills. Where agent behavior lives: how the human actually uses tools and builds capacity over time.
Lower Left — We
Collective interior. Culture, shared meaning, community values. Where the stakeholder map lives: who defines flourishing and whose voice shapes the infrastructure.
Lower Right — Its
Collective exterior. Systems, institutions, structures. Where the operating model lives: governance, access layers, metrics, and failure modes.
Each failure mode maps to a quadrant pathology, a way that dimension goes wrong. Capture is a Lower Right failure. Exclusion drift is a Lower Left failure. Paternalism creep is an Upper Left failure. Complexity collapse is a Lower Right failure. An integral approach means addressing the pathology at the level of its quadrant, not applying structural fixes to cultural problems, or interior fixes to systemic ones.
Most AI development is heavily weighted toward the Lower Right: systems, infrastructure, scale, with almost no engagement with the Upper Left and Lower Left. That is why technically sophisticated AI so often feels humanly impoverished. A complete map requires all four quadrants.
The seven framework layers map across the quadrants as follows:
| Framework layer | Quadrant |
|---|---|
| 01 — Philosophical foundation | Upper Left — consciousness and agency |
| 02 — Infrastructure framing | Lower Right — systems and institutions |
| 03 — Operating model | Lower Right — governance and structure |
| 04 — Stakeholder map | Lower Left — culture and collective identity |
| 05 — Design principles | All four — each principle anchored to a quadrant |
| 06 — Metrics | Upper Right and Lower Left — behavior and culture |
| 07 — Failure modes | All four — each failure mode is a quadrant pathology |
For the full philosophical architecture, including the levels and lines dimensions of Integral Theory, see INTEGRAL.md in the integral-ai-commons repository.
09 — Organization
A framework that lives only in documents changes nothing. The organizational layer is where these principles move from philosophy into practice, inside teams, businesses, and communities that are actively working out what human-centered AI means for them.
The organizational layer consists of four components, designed to be worked through in sequence. They can be used independently, but they are most powerful together.
Organization setup
A guided process for defining your operating model. Four questions: who are we, what does good look like for us, what never gets delegated to AI, and who might be left out. The answers become the foundation everything else is built on.
Organization CLAUDE.md
A template that translates your setup answers into an agent-readable operating model. Every AI session in your organization starts with a clear understanding of who you are, who you serve, and what decisions stay with humans.
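A minimal sketch of what such a template's skeleton might look like, assuming a markdown format. The section headings below mirror the four setup questions; this is an illustration, not the canonical template from the repository's /org directory.

```markdown
<!-- Hypothetical skeleton; headings mirror the four setup questions. -->
# Organization operating model

## Who we are
<!-- Mission, community served, values the infrastructure must honor. -->

## What good looks like for us
<!-- Community-defined flourishing; the outcomes AI work orients toward. -->

## What never gets delegated to AI
<!-- Decisions that stay with humans: dignity, rights, development. -->

## Who might be left out
<!-- Groups at risk of exclusion drift; how gaps are surfaced and closed. -->
```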
Team onboarding
A 90-minute facilitated session for introducing the operating model to your team. Built around honest conversation, not compliance training. Works for five people or fifty. No technical background required.
Ongoing assessment
A regular review process, run at 30 days, at 90 days, and every six months thereafter, that measures whether the operating model is actually working across all four dimensions: capacity expansion, equity of access, agency preservation, and community legitimacy.
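The cadence above can be sketched as a small helper that generates review checkpoints, in days since adoption. The horizon default is an illustrative assumption, not part of the framework.

```python
# Hypothetical sketch of the review cadence: checkpoints at day 30, day 90,
# then every 180 days thereafter, up to a planning horizon.
def review_offsets(horizon_days: int = 540) -> list[int]:
    """Days-since-adoption on which an assessment is due."""
    offsets = [30, 90]
    while offsets[-1] + 180 <= horizon_days:
        offsets.append(offsets[-1] + 180)
    return offsets

review_offsets()  # → [30, 90, 270, 450]
```

Each checkpoint would score the same four dimensions the measurement layer defines, so early reviews and later ones stay comparable.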
The organizational layer is where Carlton Robinson’s civic infrastructure vision meets the daily reality of a team, a business, or a community trying to use AI well. The framework provides the map. The organizational layer is how you walk the territory.
All four components are available as open-source templates in the /org directory of the integral-ai-commons repository. Organizations implementing this through Nysteria receive a facilitated version customized to their context.